Do Wider Neural Networks Really Help Adversarial Robustness?
Adversarial training is a powerful type of defense against adversarial examples. Previous empirical results suggest that adversarial training requires wider networks for better performance. However, it remains elusive how neural network width affects model robustness. In this paper, we carefully examine the relationship between network width and model robustness. Specifically, we show that model robustness is closely related to the tradeoff between natural accuracy and perturbation stability, which is controlled by the robust regularization parameter λ.
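The λ-controlled tradeoff described above corresponds to a robust objective of the form "natural loss + λ × stability penalty", as in TRADES-style adversarial training. Below is a minimal pure-Python sketch of such an objective; the function names and the choice of a KL divergence between clean and adversarial predictions as the stability term are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def trades_style_loss(clean_logits, adv_logits, label, lam):
    """Sketch of a TRADES-style robust loss:
    natural cross-entropy on the clean input, plus lambda times a
    perturbation-stability penalty (KL between clean and adversarial
    predictive distributions). Larger lambda trades natural accuracy
    for stability."""
    p_clean = softmax(clean_logits)
    natural_loss = -math.log(p_clean[label])
    stability_loss = kl(p_clean, softmax(adv_logits))
    return natural_loss + lam * stability_loss
```

With λ = 0 the objective reduces to standard (natural) training; increasing λ penalizes any divergence between the model's predictions on clean and perturbed inputs, which is the stability side of the tradeoff.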
Appendix introduction
The Appendix is organized as follows: In Appendix A, we state the symbols and notation used in this paper. In Appendix B, we provide the proofs and related lemmas of Theorem 1. In Appendix C, we provide the proofs of Theorem 2. In Appendix D, we provide the proofs and related lemmas of Theorem 3. In Appendix F, we discuss several limitations of this work. Finally, in Appendix G, we discuss the societal impact of this paper. Throughout the paper, vectors are denoted by bold lowercase letters and matrices by bold uppercase letters.
A Proof of Lemma 4.2

Lemma A.1 (Restatement of Lemma 4.2). By Lemma A.5 of [19], we have ... By substituting (A.5) into (A.1), we have ...

All experiments are conducted on a single NVIDIA V100 GPU running the GNU/Linux Debian 4.9 operating system, and are implemented in PyTorch 1.6.0. This makes the learning problem of CIFAR100 much harder. To demonstrate that the over-fitting problem comes entirely from perturbation stability (Section 3.2(3)), we ... We found this schedule to be the most effective when training only on the original CIFAR10. In this part, we provide a complete visualization of the two parts in Eqn. ... We test WideResNet-34 on CIFAR10 and CIFAR100.